Recently, great progress has been made in single-image super-resolution (SISR) based on deep learning. However, existing methods usually incur a large computational cost, and activation functions cause some intermediate-layer features to be lost. It is therefore a challenge to make the model lightweight while reducing the impact of intermediate feature loss on reconstruction quality. In this paper, we propose a Feature Interaction Weighted Hybrid Network (FIWHN) to alleviate this problem. Specifically, FIWHN consists of a series of novel Wide-residual Distillation Interaction Blocks (WDIB) as the backbone, where every three WDIBs form a Feature shuffle Weighted Group (FSWG) through mutual information mixing and fusion. In addition, to mitigate the adverse effects of intermediate feature loss on the reconstruction results, we introduce well-designed Wide Convolutional Residual Weighting (WCRW) and Wide Identical Residual Weighting (WIRW) units in WDIB, and effectively cross-fuse features of different finenesses through a Wide-residual Distillation Connection (WRDC) framework and a Self-Calibrating Fusion (SCF) unit. Finally, to complement the global features lacking in CNN models, we introduce the Transformer into our model and explore a new way of combining the CNN and the Transformer. Extensive quantitative and qualitative experiments on low-level and high-level tasks show that our proposed FIWHN achieves a good balance between performance and efficiency, and better supports downstream tasks in low-pixel scenarios.
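A minimal sketch of the residual-weighting idea behind the WIRW/WCRW units described above: channels are widened before the activation so that fewer intermediate features are destroyed by it, and the identity branch is scaled by a learnable weight. The block structure, expansion factor, and names below are illustrative assumptions, not the authors' exact design.

```python
# Hypothetical wide-residual weighting unit (not the paper's exact block).
import torch
import torch.nn as nn

class WideResidualWeighting(nn.Module):
    def __init__(self, channels: int, expand: int = 4):
        super().__init__()
        wide = channels * expand
        self.body = nn.Sequential(
            nn.Conv2d(channels, wide, 3, padding=1),  # widen before activation
            nn.ReLU(inplace=True),
            nn.Conv2d(wide, channels, 3, padding=1),  # project back
        )
        # learnable weight on the identity branch mitigates feature loss
        self.res_scale = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.res_scale * x + self.body(x)

x = torch.randn(1, 32, 48, 48)
print(WideResidualWeighting(32)(x).shape)  # torch.Size([1, 32, 48, 48])
```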
Conversational recommender systems (CRS) aim to employ natural language conversations to suggest suitable products to users. Understanding user preferences for prospective items and learning efficient item representations are crucial for CRS. Despite various attempts, earlier studies mostly learned item representations from individual conversations, ignoring the item popularity embodied across all of them. Besides, they still struggle to efficiently capture user preferences, since the information reflected in a single conversation is limited. Inspired by collaborative filtering, we propose a collaborative augmentation (COLA) method that simultaneously improves item representation learning and user preference modeling to address these issues. We construct an interactive user-item graph from all conversations, which augments item representations with user-aware information, i.e., item popularity. To improve user preference modeling, we retrieve similar conversations from the training corpus, where the involved items and attributes that reflect the user's potential interests are used to augment the user representation through gate control. Extensive experiments on two benchmark datasets demonstrate the effectiveness of our method. Our code and data are available at https://github.com/DongdingLin/COLA.
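A minimal sketch of the gate-controlled augmentation described above: a user representation is fused with a representation retrieved from similar conversations via a learned gate. The dimensions and the exact gating form are assumptions for illustration, not COLA's published equations.

```python
# Hypothetical gated fusion of a user vector with retrieved evidence.
import torch
import torch.nn as nn

class GatedAugment(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, u: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
        # u: user representation, r: representation from retrieved conversations
        g = torch.sigmoid(self.gate(torch.cat([u, r], dim=-1)))  # per-dim gate
        return g * u + (1 - g) * r  # blend own preference with retrieved one

u, r = torch.randn(4, 128), torch.randn(4, 128)
print(GatedAugment(128)(u, r).shape)  # torch.Size([4, 128])
```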
In recent years, object detection has achieved substantial performance improvements, but detection results for small objects remain unsatisfactory. This work proposes a strategy based on feature fusion and dilated convolution that employs dilated convolution to broaden the receptive field of feature maps at various scales. On the one hand, this improves the detection accuracy of larger objects; on the other, it provides more contextual information for small objects, which benefits their detection accuracy. The shallow semantic information of small objects is obtained by filtering out noise in the feature map, and the feature information of more small objects is preserved by using a multi-scale feature fusion module and an attention mechanism. Fusing this shallow feature information with deep semantic information generates richer feature maps for small object detection. Experiments show that this method achieves higher accuracy than the traditional YOLOv3 network on small and occluded objects. In addition, we achieve 32.8% mean Average Precision on small-object detection on the MS COCO2017 test set. For 640×640 input, the method reaches 88.76% mAP on the PASCAL VOC2012 dataset.
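The core mechanism is easy to illustrate: a 3×3 convolution with dilation rate d and padding d enlarges the receptive field without shrinking the feature map. The parallel branches and additive fusion below are illustrative assumptions, not the paper's exact module.

```python
# Sketch: parallel dilated convolutions enlarge the receptive field
# at a fixed resolution; larger dilation sees more context.
import torch
import torch.nn as nn

class DilatedContext(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # same kernel size, growing dilation -> growing receptive field
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return sum(b(x) for b in self.branches)  # fuse multi-scale context

x = torch.randn(1, 64, 80, 80)
print(DilatedContext(64)(x).shape)  # spatial size is preserved
```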
Error correction in automatic speech recognition (ASR) aims to correct the incorrect words in sentences generated by ASR models. Since recent ASR models usually have low word error rates (WER), error correction models should modify only incorrect words to avoid affecting originally correct tokens, which makes detecting incorrect words important for error correction. Previous works on error correction either implicitly detect error words through target-source attention or CTC (connectionist temporal classification) loss, or explicitly locate specific deletion/substitution/insertion errors. However, implicit error detection does not provide a clear signal about which tokens are incorrect, and explicit error detection suffers from low detection accuracy. In this paper, we propose SoftCorrect with a soft error detection mechanism to avoid the limitations of both explicit and implicit error detection. Specifically, we first detect whether a token is correct or not through a probability produced by a specially designed language model, and then design a constrained CTC loss that duplicates only the detected incorrect tokens, letting the decoder focus on correcting error tokens. Compared with implicit error detection with CTC loss, SoftCorrect provides an explicit signal about which words are incorrect and thus does not need to duplicate every token but only incorrect ones; compared with explicit error detection, SoftCorrect does not detect specific deletion/substitution/insertion errors but simply leaves that to the CTC loss. Experiments on the AISHELL-1 and Aidatatang datasets show that SoftCorrect achieves 26.1% and 9.4% CER reduction respectively, outperforming previous works by a large margin while still enjoying the fast speed of parallel generation.
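A minimal sketch of the detection-then-duplication step described above: a language model scores each ASR token, tokens whose probability falls below a threshold are flagged as incorrect, and only those tokens are duplicated before being fed to a CTC-style corrector. The threshold and duplication factor are illustrative assumptions, not SoftCorrect's exact recipe.

```python
# Hypothetical soft error detection + selective duplication.
import torch

def flag_and_duplicate(tokens, token_probs, threshold=0.5, dup=2):
    """Duplicate only the tokens the detector deems incorrect."""
    expanded = []
    for tok, p in zip(tokens, token_probs):
        expanded.extend([tok] * (dup if p < threshold else 1))
    return expanded

tokens = ["we", "no", "the", "answer"]          # ASR hypothesis
probs = torch.tensor([0.95, 0.20, 0.90, 0.88])  # LM-derived correctness scores
print(flag_and_duplicate(tokens, probs))
# ['we', 'no', 'no', 'the', 'answer'] -> decoder focuses on the flagged token
```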
How do we know when the predictions made by a classifier can be trusted? This is a fundamental problem that also has immense practical applicability, especially in safety-critical areas such as medicine and autonomous driving. The de facto approach of using the classifier's softmax outputs as a proxy for trustworthiness suffers from over-confidence, while the most recent works incur problems such as additional retraining cost and an accuracy-versus-trustworthiness trade-off. In this work, we argue that the trustworthiness of a classifier's prediction for a sample is highly associated with two factors: the sample's neighborhood information and the classifier's output. To combine the best of both worlds, we design a model-agnostic post-hoc approach, NeighborAgg, that leverages these two types of essential information via adaptive neighborhood aggregation. Theoretically, we show that NeighborAgg is a generalized version of a one-hop graph convolutional network, inheriting the powerful modeling ability to capture the varying similarity between samples within each class. We also extend our approach to the closely related task of mislabel detection and provide a theoretical coverage guarantee to bound the false negative rate. Empirically, extensive experiments on image and tabular benchmarks verify our theory and suggest that NeighborAgg outperforms other methods, achieving state-of-the-art trustworthiness performance.
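A minimal sketch of the two signals NeighborAgg combines: the classifier's softmax output and a class histogram of the sample's nearest neighbors in a feature bank. Concatenating them for a small trust predictor is an illustrative stand-in for the paper's adaptive aggregation, and all names below are assumptions.

```python
# Hypothetical neighborhood signal for trustworthiness estimation.
import torch
import torch.nn.functional as F

def neighbor_vote(sample, bank_feats, bank_labels, num_classes, k=5):
    """Class histogram of the k nearest neighbors in the feature bank."""
    sims = F.cosine_similarity(sample.unsqueeze(0), bank_feats)
    idx = sims.topk(k).indices
    return F.one_hot(bank_labels[idx], num_classes).float().mean(0)

feats = torch.randn(100, 16)            # held-out feature bank
labels = torch.randint(0, 3, (100,))
x = torch.randn(16)                     # query sample's features
logits = torch.randn(3)                 # classifier output for the sample
trust_input = torch.cat([F.softmax(logits, -1),
                         neighbor_vote(x, feats, labels, 3)])
print(trust_input)  # would feed a small trust predictor in the full method
```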
Query-focused summarization has been considered an important extension of text summarization. It aims to generate a concise highlight for a given query. Unlike generic text summarization, query-focused summarization has long been plagued by a lack of high-quality large-scale datasets. In this paper, we investigate whether we can integrate and transfer the knowledge of text summarization and question answering to assist few-shot learning in query-focused summarization. We propose prefix-merging, a prefix-based pretraining strategy for few-shot learning in query-focused summarization. Drawing inspiration from prefix-tuning, we integrate the task knowledge from text summarization and question answering into a properly designed prefix and apply the merged prefix to query-focused summarization. With only a small number of trainable parameters, prefix-merging outperforms fine-tuning on query-focused summarization. We further discuss the influence of different prefix designs and provide a visualized explanation of how prefix-merging works.
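A minimal sketch of the merging idea: prefix embeddings trained on text summarization and on question answering are combined into a single prefix that stays trainable for few-shot query-focused summarization. The concatenation scheme and sizes below are illustrative assumptions, not the paper's exact prefix design.

```python
# Hypothetical prefix-merging: combine two task prefixes into one.
import torch
import torch.nn as nn

prefix_len, dim = 16, 768
summ_prefix = torch.randn(prefix_len, dim)  # learned on text summarization
qa_prefix = torch.randn(prefix_len, dim)    # learned on question answering

# merged prefix keeps slots from both tasks; remains trainable for few-shot QFS
merged = nn.Parameter(torch.cat([summ_prefix, qa_prefix], dim=0))
print(merged.shape)  # torch.Size([32, 768]) -> prepended to every input
```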
The task of empathetic response generation aims to understand how the speaker feels about their experiences and then reply to the speaker appropriately. To solve the task, it is essential to model the content-emotion duality of a dialogue, which consists of a content view (i.e., what personal experiences are described) and an emotion view (i.e., how the speaker feels about these experiences). To this end, we design a framework that models content-emotion duality via disentanglement (CEDual) for response generation. With disentanglement, we encode the dialogue history from both the content and emotion views, and then generate the empathetic response based on the disentangled representations, so that both the content and emotion information of the dialogue history can be embedded in the generated response. Experiments on the benchmark dataset EmpatheticDialogues show that the CEDual model achieves state-of-the-art performance on both automatic and human metrics, and it also generates more empathetic responses than previous methods.
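A minimal sketch of the disentanglement described above: the dialogue history is encoded into separate content and emotion representations, both of which are passed to the decoder. Two projection heads over a shared encoder are an illustrative simplification, not CEDual's exact architecture.

```python
# Hypothetical dual-view encoding of the dialogue history.
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.content_head = nn.Linear(dim, dim)  # what was experienced
        self.emotion_head = nn.Linear(dim, dim)  # how the speaker feels

    def forward(self, history: torch.Tensor):
        _, h = self.encoder(history)
        h = h.squeeze(0)
        return self.content_head(h), self.emotion_head(h)

content, emotion = DualEncoder()(torch.randn(2, 10, 256))
print(content.shape, emotion.shape)  # both (2, 256), passed to the decoder
```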
The traditional homography estimation pipeline consists of four main steps: feature detection, feature matching, outlier removal, and transformation estimation. Recent deep learning models aim to solve the homography estimation problem with a single convolutional network. Although these models are trained end-to-end to simplify homography estimation, they lack the feature matching step and/or the outlier removal step, which are important steps in the traditional pipeline. In this paper, we attempt to build a deep learning model that imitates all four steps of the traditional homography estimation pipeline. In particular, the feature matching step is implemented using a cost volume technique. To remove outliers from the cost volume, we treat this outlier removal problem as a denoising problem and propose a novel self-supervised loss to solve it. Extensive experiments on synthetic and real datasets show that the proposed model outperforms existing deep learning models.
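A minimal sketch of a cost volume for feature matching, as used in the model above: each spatial cell in one feature map is correlated with every cell in the other, yielding a matching cost for all candidate pairs. The global all-pairs form shown here is one common variant, not necessarily the paper's exact construction.

```python
# Sketch: all-pairs correlation cost volume between two feature maps.
import torch

def cost_volume(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """feat_a, feat_b: (B, C, H, W) -> costs: (B, H*W, H*W)."""
    b, c, h, w = feat_a.shape
    fa = feat_a.flatten(2).transpose(1, 2)  # (B, HW, C)
    fb = feat_b.flatten(2)                  # (B, C, HW)
    return torch.bmm(fa, fb) / c ** 0.5     # scaled dot-product correlation

costs = cost_volume(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
print(costs.shape)  # torch.Size([1, 1024, 1024])
```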
Token uniformity is commonly observed in transformer-based models, where different tokens share a large amount of similar information after passing through stacked self-attention layers. In this paper, we propose to use the distribution of singular values of each transformer layer's outputs to characterize the phenomenon of token uniformity, and empirically show that a skewed singular value distribution can alleviate the token uniformity problem. Based on our observations, we define several desirable properties of singular value distributions and propose a novel transformation function to update the singular values. We show that, besides alleviating token uniformity, the transformation function should also preserve the local neighborhood structure of the original embedding space. Our proposed singular value transformation function is applied to a range of transformer-based language models such as BERT, ALBERT, RoBERTa, and DistilBERT, and improved performance is observed on semantic textual similarity evaluation and a range of GLUE tasks. Our source code is available at https://github.com/hanqi-qi/tokenuni.git.
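A minimal sketch of transforming a layer's singular value spectrum and reconstructing the output, as proposed above: decompose the token embedding matrix, reshape the singular values, and recompose. The power-based transform below is an illustrative choice, not the paper's exact function.

```python
# Hypothetical singular value transformation on one layer's token outputs.
import torch

def transform_singular_values(embeddings: torch.Tensor, alpha: float = 0.5):
    """embeddings: (num_tokens, dim). Reshape the spectrum, keep U and V."""
    u, s, vh = torch.linalg.svd(embeddings, full_matrices=False)
    s_new = s.max() * (s / s.max()) ** alpha  # lift small singular values
    return u @ torch.diag(s_new) @ vh

x = torch.randn(128, 768)  # one transformer layer's token representations
print(transform_singular_values(x).shape)  # torch.Size([128, 768])
```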
With the increasing number of videos, there is a great demand for techniques that help people quickly navigate to the video segments they are interested in. However, current video understanding mainly focuses on video content summarization, with little effort devoted to exploring the structure of videos. Inspired by text outline generation, we introduce a novel video understanding task, namely Video Outline Generation (VOG). The task is defined to contain two subtasks: (1) first segmenting the video according to its content structure, and then (2) generating a heading for each segment. To learn and evaluate VOG, we annotate a 10K+ dataset called DuVOG. Specifically, we use OCR tools to recognize the subtitles of the videos. Annotators are then asked to divide the subtitles into chapters and write a heading for each chapter. In videos, highlighted text tends to be the heading, since it is more likely to attract attention. Therefore, we propose a Visual Subtitle feature Enhanced video outline generation model (VSENet), which takes the textual subtitles together with their visual font sizes and positions as input. We treat the VOG task as a sequence labeling problem that extracts the positions of heading spans, which are then rewritten to form the final outline. Furthermore, based on the similarity between video outlines and text outlines, we use a large number of articles with chapter headings to pretrain our model. Experiments on DuVOG show that our model largely outperforms other baseline methods, achieving an F1 score of 77.1 at the video segmentation level and a Rouge-L_F0.5 of 85.0 at the heading generation level.
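A minimal sketch of casting outline generation as sequence labeling, as described above: each subtitle token receives a tag marking whether it begins or continues a heading span. The tiny tagger below (embeddings plus a linear layer) is an illustrative stand-in; the real model also consumes visual font size and position features.

```python
# Hypothetical BIO tagger for extracting heading spans from subtitles.
import torch
import torch.nn as nn

TAGS = {"O": 0, "B-HEAD": 1, "I-HEAD": 2}  # BIO tags over subtitle tokens

class HeadingTagger(nn.Module):
    def __init__(self, vocab: int = 1000, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.classify = nn.Linear(dim, len(TAGS))

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.classify(self.embed(token_ids))  # (B, T, num_tags)

logits = HeadingTagger()(torch.randint(0, 1000, (2, 20)))
print(logits.argmax(-1).shape)  # one predicted tag per subtitle token
```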